Abstract:
A method of determining blending coefficients for respective animations is provided. The method comprises obtaining animation data, the animation data defining at least two different animations that are at least in part to be simultaneously applied to an animated object, each animation comprising a plurality of frames; obtaining corresponding video game data, the video game data comprising an in-game state of the object; inputting the animation data and video game data into a machine learning model, the machine learning model being trained to determine, based on the animation data and corresponding video game data, a blending coefficient for each of the animations in the animation data; determining, based on the output of the machine learning model, one or more blending coefficients for at least one of the animations, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and blending the at least simultaneously applied part of the two animations using the or each determined blending coefficient, the contribution from each of the at least two animations being in accordance with the or each determined blending coefficient.
Publication number: EP3683772A1
Application number: EP19213906.1
Filing date: 2019-12-05
Publication date: 2020-07-22
Inventors: Fabio Cappello; Oliver Hume
Applicant: Sony Interactive Entertainment Inc.
IPC main classification: G06T13-00
Description:
[0001] The present disclosure relates to a method and system for determining blending coefficients for blending different animations together.

Background
[0002] In computer-generated animation, it is common to generate different animations for an animated object. The animated object may be, for example, a humanoid character, and the different animations may correspond to e.g. an idle cycle, a walk cycle, a run cycle, a sparring cycle, and so on. When switching between different animations, the animations are usually blended together so as to create a composite animation that includes a contribution from each animation. This prevents the character from appearing to 'pop' as the object transitions from one type of movement to another. The degree of blending is typically controlled by blending coefficients, which define the relative contribution from each of the animations in animating the character.
[0003] In video games, the manner in which a video game character performs certain motions may be dependent on some in-game variable, such as e.g. the character's health, speed vector, stamina level, etc. Typically, the relationship between in-game variables and animation blending coefficients is defined manually by an animator. This may require an animator to explicitly define, for each type of character and each animation within a given combination, a relationship between the in-game variables and the animations in the respective combination. As will be appreciated, this may become a time-consuming process, especially where there are a large number of characters, in-game variables, and animations that are to be blended together. Clearly, defining relationships between in-game variables and blending coefficients in this way is an inefficient use of an animator's time.
[0004] Moreover, manually defining relationships between in-game variables and blending coefficients for combinations of animations is prone to error. This is because the animator is typically unable to foresee all of the possible combinations of in-game variables that may affect the character's movement. It may be, for example, that there is a power up in a video game that greatly increases the running speed of the character, but that the blending coefficient for combining e.g. the idle cycle and running cycle is not sufficiently weighted towards the running cycle. As a result, the character may appear to move with increased speed but in an unrealistic manner (i.e. the speed of leg movement being out-of-sync with the translational movement).
[0005] The present invention seeks to alleviate or at least mitigate the above-identified problems.

Summary
[0006] According to a first aspect disclosed herein, there is provided a method in accordance with claim 1.
[0007] According to a second aspect disclosed herein, there is provided a system in accordance with claim 9.

Brief Description of the Drawings
[0008] To assist understanding of the present disclosure and to show how embodiments may be put into effect, reference is made by way of example to the accompanying drawings, in which:

Figure 1A schematically shows an example of a walking animation cycle;
Figure 1B schematically shows an example of a jumping animation cycle;
Figure 2 shows an example of a method for determining blending coefficients for blending animations together; and
Figure 3 schematically shows an example of a system for determining blending coefficients for respective animations.

Detailed Description
[0009] Computer-generated animations are often used in different types of video content, such as the video output during the playing of a video game. In video games, the animation is typically of an articulated object, such as, for example, a humanoid character. The object is usually represented by a surface representation that forms the skin or mesh of the object, and a hierarchical set of interconnected bones forming the three-dimensional skeleton or 'rig' that can be manipulated by an animator so as to animate the mesh. In some cases, the bones may not be hierarchical and may simply allow one portion of the object to be configured differently with respect to another. It will be appreciated that whilst the skeleton or 'rig' may provide sufficient representation and articulation of limbs, head, torso and the like for the desired amount of animation control, it may not directly conform to an anatomical skeleton or part thereof of the represented object.
[0010] In the art of computer animation, creating and manipulating the bones of an object is referred to as rigging, whilst the binding of different parts of the mesh to corresponding bones is called skinning. In rigging, each bone is associated with a three-dimensional transformation (sometimes referred to herein as a 'transform') that defines at least one of a position, scale and orientation of a corresponding bone. The transformation may also define a parent bone to which a child bone is attached. In such cases, the transformation of the child is the product of its parent transform and its own transform. This means, for example, that if the skeleton defines a humanoid character, then movement of the thigh-bone will result in the lower leg being moved too. The transformations may be defined for each successive frame, for each bone, such that the temporal variation in the transformations corresponds to the object performing a particular motion.
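By way of illustration, the parent-child transform composition described above can be sketched as follows. This is a minimal example with hypothetical class and field names; the patent does not prescribe any particular implementation.

```python
import numpy as np

class Bone:
    def __init__(self, name, local_transform, parent=None):
        self.name = name
        self.local = local_transform   # 4x4 matrix encoding position/rotation/scale
        self.parent = parent           # None for the root bone

    def world_transform(self):
        # A child's transform is the product of its parent's and its own,
        # so moving the thigh bone also moves the lower leg attached to it.
        if self.parent is None:
            return self.local
        return self.parent.world_transform() @ self.local

pelvis = Bone("pelvis", np.eye(4))
thigh = Bone("right_thigh", np.eye(4), parent=pelvis)
shin = Bone("right_shin", np.eye(4), parent=thigh)
# Changing thigh.local changes shin.world_transform() as well.
```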
[0011] The bones of the object are each associated with a respective portion of the object's visual representation. In the most common case of a polygonal mesh character, the bone is associated with a group of vertices; for example, in a model of a human being, the e.g. 'right upper arm' bone would be associated with the vertices making up the polygons in the model's right upper arm. Portions of the character's skin are normally associated with multiple bones, each one having a scaling factor called a vertex weight or blend weight. This process of skinning the character is typically done by a graphical processing unit (GPU), using a shader program.
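The effect of the vertex (blend) weights can be expressed with the standard linear blend skinning formula, given here for context rather than quoted from the patent:

```latex
v' = \sum_{i=1}^{n} w_i \, T_i \, B_i^{-1} \, v, \qquad \sum_{i=1}^{n} w_i = 1
```

where v is the bind-pose position of the vertex, B_i is the bind-pose transform of bone i, T_i is that bone's current world transform, and w_i is the vertex weight binding the vertex to that bone.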
[0012] Figure 1A shows an example of a plurality of frames, F1 - F5, of an animated object. In Figure 1A, the object is a human character 101 and the character has been animated so as to perform a walking motion. The animation shown in Figure 1A may correspond to a loop-animation or a 'walk cycle' defined by an animator for a character in a video game. This may be, for example, the animation that is displayed in response to a player moving their character in a particular direction (in this case, to the right) or as part of a cut-scene. It may be, for example, that an animator creates a different animation for walking in a different direction (e.g. forwards, backwards, right, left).
[0013] In Figure 1A, each successive frame depicts the character at a different respective pose. The pose of the character is defined in terms of bones corresponding to the head, spine, upper arms, lower arms, upper legs, lower legs and feet, and their corresponding transformations. The transforms for each bone are defined for each frame, such that the character is depicted as walking in the manner shown, as the frames are successively output for display. It will be appreciated that Figure 1A is a simplified example of an animation, and that in reality the character may be non-human and/or may be formed of different bones or a different number thereof.
[0014] It will be further appreciated that the bones and transforms for a given computer-generated character need not be character specific. For example, it may be that there are multiple characters in a video game, each sharing substantially similar skeletons to which the transforms corresponding to e.g. walking can be applied. However, for some characters, it may be that the physiologies are substantially different, and so the transforms corresponding to e.g. walking need to be re-defined for that character's specific set of bones.
[0015] Generally, an animator will create multiple different animations for a given object. Here, an 'animation' refers to a different type of motion (corresponding to a variation in pose over a plurality of frames). In a simple example, this may include creating a 'jump' animation and a 'walk' animation for the animated object. The jump animation may correspond to the character launching themselves into the air, as shown in Figure 1B. The walk animation may correspond to a walk cycle, as shown previously in Figure 1A. Again, the changes in pose corresponding to the 'jump' animation may be defined in terms of bones and corresponding transforms, the transforms for each bone changing over successive frames in a manner that corresponds with the character performing a jumping action.
[0016] When rendering the animated object, it is undesirable to suddenly switch from one animation to another, since this will result in the object's movement appearing jerky and unnatural. To avoid this, the different animations are usually blended together so as to create a smooth transition between them. This may involve, for example, assigning each animation to an animation layer, and assigning each layer a weight (i.e. a blending coefficient). The weight defines the extent to which that animation layer will contribute to the final blended result. In the example of Figures 1A and 1B, this may involve blending the 'walking' and 'jumping' animations together, so as to depict the character as jumping from a walking position, as opposed to suddenly launching into a jump.
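A minimal sketch of such per-bone blending between two animation layers, assuming poses stored as per-bone positions and rotation quaternions (all names are hypothetical; positions are linearly interpolated, rotations spherically interpolated):

```python
import numpy as np

def slerp(q0, q1, t):
    # Spherical interpolation between two unit quaternions (x, y, z, w)
    dot = np.dot(q0, q1)
    if dot < 0.0:               # flip one quaternion to take the short path
        q1, dot = -q1, -dot
    if dot > 0.9995:            # nearly parallel: linear interpolation is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def blend_poses(pose_a, pose_b, w):
    # pose_a, pose_b: dict mapping bone name -> (position np.array, rotation quat np.array)
    # w: weight (blending coefficient) of pose_b, in [0, 1]
    blended = {}
    for bone in pose_a:
        pos_a, rot_a = pose_a[bone]
        pos_b, rot_b = pose_b[bone]
        blended[bone] = ((1.0 - w) * pos_a + w * pos_b, slerp(rot_a, rot_b, w))
    return blended
```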
[0017] As will be appreciated, when combining animations, it may be necessary to adapt the playback speed of each animation, to ensure that corresponding points in each animation remain synchronized. In the example of a humanoid character walking at different speeds, this ensures that the footsteps remain synchronised, despite different animations being used to animate the character. The playback speed of each animation may be adjusted in accordance with the weighting associated with the corresponding animation layer, for example.
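One plausible scheme for this adjustment (illustrative only; the patent leaves the method open) is to resample each cycle to the weighted blend of the cycle durations, so that corresponding footfalls coincide:

```python
def blended_cycle_duration(d_a, d_b, w):
    # Target duration is the weighted average of the two cycle durations
    return (1.0 - w) * d_a + w * d_b

def playback_rate(clip_duration, target_duration):
    # Rate > 1 speeds the clip up; rate < 1 slows it down
    return clip_duration / target_duration

# Example: a 1.2 s walk cycle and a 0.8 s run cycle blended at w = 0.5
target = blended_cycle_duration(1.2, 0.8, 0.5)   # 1.0 s common cycle
rate_walk = playback_rate(1.2, target)           # 1.2x
rate_run = playback_rate(0.8, target)            # 0.8x
```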
[0018] In video games, the contribution from each animation in the blended result will typically depend on some in-game parameter. For example, it may be that a video game character is able to walk, jog and run, and that a corresponding animation exists for each of these motions. The extent to which the character is depicted as walking, jogging and running may depend on some in-game parameter, such as the character's 'speed'. For example, the 'speed' may take a value between '0' and '1' and the weighting of the running animation may be proportional to the speed parameter (and conversely, the weighting of the walking animation may be inversely proportional to the speed value). If the speed is closer to '1', then it may be that the running animation is the dominant animation in the blended result. In turn, this may cause the character to appear to be jogging quickly or running. The speed associated with a video game character may depend on an input provided by a user, for example, moving the analogue stick whilst tapping 'x' may correspond to a run, whilst moving just the analogue stick may correspond to a walk.
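A sketch of the simple linear mapping described here (parameter and animation names are illustrative):

```python
def blend_weights_from_speed(speed):
    # 'speed' in [0, 1] sets the run weight directly;
    # the walk weight is its complement.
    speed = min(max(speed, 0.0), 1.0)   # clamp to [0, 1]
    return {"walk": 1.0 - speed, "run": speed}

blend_weights_from_speed(0.8)   # {'walk': 0.2, 'run': 0.8} -> mostly running
```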
[0019] Typically, the relationship between an in-game variable and a blending coefficient for a given animation layer is defined manually by an animator. In most cases, a simple linear relationship is used for convenience. As will be appreciated, the definition of a relationship between an in-game variable and a corresponding blending coefficient may be a time-consuming and burdensome process, especially if this needs to be defined for multiple character skeletons and the combinations of animations applied thereto. This problem is further exacerbated if there are a large number of in-game variables to be considered. For example, it may be that a character has multiple different states (e.g. wounded, angry, fighting, tired, etc.) and that a blending coefficient needs to be defined for each of the animation cycles relevant to those states.
[0020] It would therefore be desirable if this burden on animators in defining relationships between in-game variables and blending coefficients could be alleviated in some way. Moreover, it would be desirable if this relationship could be defined in a manner that is less prone to error, and that allows for richer and more flexible mappings between in-game variables and blending coefficients. A method for automating such a process will now be described in relation to Figure 2.
[0021] Figure 2 shows schematically an example of a method of determining blending coefficients for animations. In Figure 2, the method is described in relation to a set of animation and video game data that is to be processed; the training required for executing the method will be described later.
[0022] At a first step 201, animation data is obtained. The animation data comprises at least two different animations that are to be applied to an animated object in a video game (non-limiting examples being the animation sequences corresponding to walking and jumping as illustrated in Figures 1A and 1B). Each animation comprises a plurality of frames, with each frame defining a respective pose of the animated object. Generally, the animated object will be a computer-generated character, such as e.g. a humanoid character, comprising a plurality of bones defining a skeleton of the character. The pose of the computer-generated character may be defined in terms of bones and corresponding transforms, with the transforms defining, for each bone, at least one of a position, orientation, scale and parent of that bone, relative to a default position, scale and orientation (such as e.g. the character standing in a T-pose). The transforms may be represented as float values, for example, as is the case in the Unreal 4™ game engine.
[0023] It will be appreciated that equivalent forms of pose data are also considered to be within the scope of the invention, such as end-point or key-point definitions of a character pose (e.g. defining the position of a tip of a finger, toe, head etc., and relying on skeletal constraints and optionally the generated pose for a prior frame to generate the current pose), which may be preferable for use with a physics-driven game engine, for example, or video data (as also discussed later herein), from which pose data may be inferred, either from skeletal modelling or similar pose estimation techniques. Hence more generally the animation data comprises, for each of the at least two different animations, a sequence of bone and/or skeleton state data, for example comprising temporal variations of at least some of the bones of a computer-generated character, from which a corresponding sequence of skeletal poses can be derived (for example based directly on some or all of the bone and/or skeleton state data, and/or using some or all of the bone and/or skeleton state data in combination with skeletal constraints and the like).
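For concreteness, such animation data might be organised as below. This is a hypothetical layout; the field names are not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class BoneTransform:
    position: Tuple[float, float, float]          # relative to the default (e.g. T-pose)
    rotation: Tuple[float, float, float, float]   # quaternion (x, y, z, w)
    scale: Tuple[float, float, float] = (1.0, 1.0, 1.0)
    parent: Optional[str] = None                  # name of the parent bone, if any

@dataclass
class Animation:
    name: str                                           # e.g. "walk_cycle"
    frames: List[Dict[str, BoneTransform]] = field(default_factory=list)
```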
[0024] The animations may correspond to respective animation cycles, such as e.g. idle, walking, jogging, running, jumping, sparring, falling, etc. that are to be applied to the computer-generated character. As noted above, the different animation cycles may be defined in terms of temporal variations of at least some of the bones of the computer-generated character. For example, the walking motion may be defined in terms of changes in pose of the character's legs, arms, shoulders, etc., over successive frames.
[0025] The animation data may be generated by an animator during the development of a video game, for example through the use of a game engine that allows animators to define characters and their animation. This data may be obtained in the sense that it is received (e.g. as a file) at a computing device, wherein the computing device is configured to process that data (described later in relation to Figure 3).
[0026] At a second step 202, video game data corresponding with the animation data is obtained. The video game data comprises at least one in-game variable defining a state of the animated object in the video game, when the at least two different animations are to be applied to the animated object. The in-game variable may define at least one of: a physical state of the animated object, a speed value or velocity vector associated with the animated object, an emotive state associated with the animated object, a context associated with the animated object, etc. Generally, the in-game variable defines any state of the animated object that affects the movement of the animated object.
[0027] The physical state of the animated object may include e.g. an indication of a character's health, or damage that has been taken by the animated object. The physical state may also include e.g. a stamina level, or tiredness associated with the character. In yet further examples, the physical state may provide an indication of the height and/or weight of the character. As will be appreciated, any one or more of these may affect how the character moves, and therefore how different animation cycles are to be blended together, so as to represent that movement to varying degrees. In a simple example, it may be that the degree of blending between a walking and running animation cycle is biased towards running, with the amount of biasing being proportional to the character's health.
[0028] The velocity vector of the animated object provides an indication of the speed and direction in which the animated object is moving in the video game. Generally, this will be dependent on an input provided by a user, e.g. controller inputs. The velocity vector may also depend on any special items or power-ups that the character has acquired in the game - i.e. it may be that a particular item or power-up greatly increases the speed of the character. Again, the velocity vector of the character may influence how different animation cycles are blended together, and the contribution from each in the final blended result.
[0029] The emotive state provides an indication of an emotive state of the animated object. For example, it may be that the movement of the character depends on a level of anger or sadness, and that the blending between different animations depends on an extent of one or more emotions that the character is experiencing. As an example, Kratos in the game God of War® has an associated 'RAGE' level, and it may be that e.g. the bias towards running over walking is dependent on Kratos' rage level.
[0030] The context provides an indication of a context within the video game that is being experienced by the animated object. It may be, for example, that the in-game variable indicates that a video game character is in or approaching a combat scenario, and so it may be that the blending between e.g. an idle animation cycle and sparring animation cycle is to be biased towards the sparring animation cycle, based on whether, or how close, the character is to the combat scenario.
[0031] At a third step S203, the animation data and corresponding video game data are input to a machine learning model. The machine learning model is trained to determine, for at least one of the animations in the animation data, and at least one in-game variable in the corresponding video game data, a respective blending coefficient. As mentioned previously, the blending coefficient defines a relative weighting with which at least one of the animations is to be applied to the animated object, in combination with one or more other animations.
[0032] In examples where only two animations are to be blended together, it may be sufficient to determine a blending coefficient for one of the animations. For example, it may be determined, based on the output of the machine learning model, that e.g. the 'walking' animation cycle is to be assigned a blending coefficient of 0.4, and it can therefore be inferred that the other animation, e.g. a running cycle, is to be assigned a blending coefficient of 0.6. Here, assigning a blending coefficient to a given animation may correspond to annotating each frame in that animation with an identifier, indicating the blending coefficient associated with that animation. The identifier may then be used by a game engine, when rendering the blended combination of the two animations, for example.
[0033] In other examples, the machine learning model is trained to determine a blending coefficient for each of the at least two animations in the animation data. For example, it may be that each animation is assigned to an animation layer, and that the machine learning model is trained to determine, for each animation layer, a respective blending coefficient. Again, determining a blending coefficient for each animation may involve annotating each frame of the animation with an identifier indicating the blending coefficient that is to be applied to that animation.
[0034] The machine learning model may comprise at least one of a trained neural network, such as a trained convolutional or recurrent neural network (CNN, RNN respectively), a multilayer perceptron (MLP), or a restricted Boltzmann machine, for example. Ultimately, any suitable machine learning system may be used. The machine learning model receives as an input at least two different animations; for example, corresponding single frames or 'focal frames' from the two animations, optionally with one or more frames preceding and/or following one or both of the focal frames at issue to provide motion context, with the input data being the animation data for that frame and/or an abstracted representation of that data, optionally with the amount of any abstraction or any compression being less for the focal frames than for any motion context frames. It will be appreciated that if the number of motion context frames equals the number of frames in the animation sequence then in principle the machine learning model can receive both entire animation sequences, and the notion of a focal frame becomes irrelevant. Consequently it is envisaged as possible to input individual focal frames, or subsections of the animations, or the entire animation sequence to a respective machine learning model trained using that type of input.
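As one concrete possibility (the passage above lists several candidate model types without fixing one), an MLP operating on a pair of flattened focal-frame poses plus in-game variables might look as follows; all layer sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BlendCoefficientModel(nn.Module):
    def __init__(self, pose_dim, num_game_vars, num_animations=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * pose_dim + num_game_vars, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, num_animations),
        )

    def forward(self, pose_a, pose_b, game_vars):
        # pose_a / pose_b: flattened focal-frame bone transforms of each animation
        # game_vars: in-game variables (health, speed, etc.) as a feature vector
        x = torch.cat([pose_a, pose_b, game_vars], dim=-1)
        # softmax keeps the coefficients in [0, 1] and summing to 1
        return torch.softmax(self.net(x), dim=-1)
```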
[0035] At a fourth step, S204, one or more blending coefficients for at least one of the animations is determined based on the output of the machine learning model. The or each blending coefficient defines a relative weighting with which each animation is to be applied to the animated object. That is, the at least two different animations that are to be applied to the animated object and one or more in-game variables currently associated with the animated object are input to the machine learning model, and in response thereto, it outputs a blending coefficient that is to be applied when blending the at least two animations together.
[0036] Depending on the nature of the input, the output may also be different. If corresponding individual focal frames are used (with or without any motion context information), then a blending coefficient for one or each of the corresponding focal frame pairs may be output, enabling the blending coefficient to change over the sequence of focal frames that are used as the input. Meanwhile if the two entire animation sequences are input then either a single blending coefficient for one or each entire animation sequence may be generated, or multiple blending coefficients may be output. This latter option however either requires that the machine learning model is responsive to the number of frames in a given animation sequence, or that the output multiple blending coefficients are then fitted to the actual number of frames in the sequence. Consequently a machine learning model based on providing blending coefficients for focal frames (with or without any context frames) will typically be easier to train and/or use, given the same available training data.
[0037] The machine learning model may be trained using supervised deep learning, with previously generated animation data and corresponding video game data being used as training data.
[0038] The previously generated animation data includes different sets of animations, with each set corresponding to a combination of animations that are at least in part to be simultaneously applied to a given animated object (the overlap may not be exact for example because the animations are of different lengths, or the transition from one sequence to another works best from a point partway through one or both animation sequences). The previously generated animation data also includes, for each animation in a given set, the blending coefficients associated with the respective animations. The blending coefficients will have been defined manually by an animator, during the animation process. The blending coefficients may be obtained from e.g. metadata that is stored in association with each animation. Alternatively, the blending coefficients may be obtained from the animations themselves, where e.g. each frame comprises a blending coefficient.
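A minimal supervised-training sketch under the setup just described, reusing the hypothetical BlendCoefficientModel from the earlier sketch; the dataset of animator-labelled examples and all hyper-parameters are assumptions:

```python
import torch

# 'training_set' is assumed to yield tuples of
# (pose_a, pose_b, game_vars, target_coeffs), with the targets taken from
# the animator-authored blending coefficients in the previously generated data.
model = BlendCoefficientModel(pose_dim=64, num_game_vars=4)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

num_epochs = 10   # illustrative training budget
for epoch in range(num_epochs):
    for pose_a, pose_b, game_vars, target_coeffs in training_set:
        predicted = model(pose_a, pose_b, game_vars)
        loss = loss_fn(predicted, target_coeffs)   # error vs. authored coefficients
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
```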
[0039] The previously generated animation data may comprise at least some of the bones and transforms defining each of the different movements performed by the bones of a character's skeleton. In some examples, the previously generated animation data may correspond to e.g. humanoid skeletons. In other examples, the previously generated animation data may correspond to a plurality of different skeleton types (and therefore having different or additional bones associated therewith), corresponding to e.g. different characters in a video game.
[0040] As mentioned above, the machine learning model is also trained with video game data corresponding with the animation data. The video game data comprises one or more in-game variables defining an in-game state of the animated object, wherein the animated object corresponds to the animated object to which the at least two animations in the previously generated animation data are to be applied. As above, the video game data may provide e.g. an indication of a physical state, velocity vector, emotive state, or context associated with the animated object.
[0041] The blending coefficients for a given set of animations may be dependent on multiple in-game variables, and the relationship may be different depending on different combinations of in-game variables. For example, it may be that e.g. a physical state corresponding to 'injured' takes precedence over an increased velocity vector, and so the blending between a 'walking' and 'running' animation cycle needs to be weighted more towards walking as opposed to running, or changes over the course of the transition to reflect the speed increase becoming more difficult for the injured character. By taking into account different in-game variables and/or combinations thereof, the machine learning model can be trained to learn a relationship between different in-game variables (and/or combinations) and corresponding blending coefficients. In some examples, such as that of the injured character above, this may result in a non-linear relationship between in-game variables and blending coefficients being determined by the machine learning model.
[0042] It will be appreciated that, by training the machine learning model with a large number of different animations (optionally for a large variation of skeletons), the model will be implicitly trained to recognize different animations. For example, the machine learning model may be trained to identify, for each animation in a set, the learnt animation (or representation thereof) to which that animation corresponds. This may correspond to learning a generic representation of different animation cycles and then determining, for a current animation (or representation thereof), a learnt representation that corresponds most closely with that animation, i.e. based on an associated confidence value. Once the animations in a given set have been identified, and the corresponding in-game variables are known, the trained machine learning model can then map the animations in the current set and in-game variable(s) associated therewith to one or more corresponding blending coefficients (i.e. using the learnt relationship).
[0043] At a fifth step S205, the at least two animations are blended together using the determined blending coefficient(s), the contribution of each animation in the final blended result being in accordance with the determined blending coefficient(s). In some examples, each animation is assigned a respective blending coefficient and the blending is performed based on each of the blending coefficients. In such examples, each blending coefficient is equal to or greater than zero and equal to or less than 1, i.e. the blending coefficient defines the fractional contribution of each animation in the final blended result. The blending may be performed by a game engine, for example. In Figure 3, the blended combination of animations is shown as output A12. The blended combination of the at least two animations may then be output for display at a display device. This may be output to e.g. an animator, for checking whether the result is what would be expected for that combination of animations and a corresponding in-game variable. In addition or alternatively, it may be that the animations and in-game variables are generated whilst the video game is being played, and that the animations are blended and output for display in accordance with the determined blending coefficient(s).
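Expressed as a formula (standard weighted blending, consistent with the fractional contributions described above), the blended transform of each bone at a given frame is:

```latex
T_{\text{blend}} = \sum_{i=1}^{N} b_i \, T_i, \qquad 0 \le b_i \le 1, \quad \sum_{i=1}^{N} b_i = 1
```

where b_i is the blending coefficient determined for animation i and T_i is the corresponding bone transform from that animation. In practice, the rotational components are typically combined by spherical interpolation rather than a direct linear sum, as in the earlier blending sketch.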
[0044] Figure 3 shows schematically an example of a system 300 for determining blending coefficients for respective animations to be applied to an animated object. In Figure 3, the system 300 corresponds to a system that has been trained to determine blending coefficients in any of the manners described above.
[0045] The system 300 comprises an input unit 301 configured to obtain animation data and corresponding video game data. The animation data and corresponding video game data may correspond to any of the animation data and corresponding video game data described previously. For example, the animation data may define at least some of the bones and corresponding transforms for a given video game character, with the bones and corresponding transforms being defined for each of the frames of a respective animation in the animation data. The video game data may comprise at least one in-game variable defining a state of the character in the video game. In Figure 3, the animation data is shown as comprising two animations A1 and A2 and the corresponding video game data comprising an in-game variable G1. It will be appreciated that this is just an illustrative example and that in other examples, the animation data and video game data may comprise more than two animations and in-game variables respectively.
[0046] The obtained animation data and video game data is input to a modelling unit 302 (for example using any of the input forms described previously herein). The modelling unit 302 is trained to determine, based on the animation data and corresponding video game data, a blending coefficient for at least one of the animations in the animation data. Preferably, the modelling unit 302 is trained to determine a blending coefficient for each animation in the animation data. As described previously, it may be that each animation is assigned to a respective animation layer, and that the blending coefficient associated with a given animation layer is used by a game engine when rendering the combination of the animation layers. As before, the blending coefficient defines a relative weighting with which each animation is to be applied to the animated object.
[0047] The modelling unit 302 may be trained using supervised deep learning to determine a relationship between animation data and corresponding video game data. As described previously, the training may be performed using previously generated animation data and corresponding video game data, wherein the animation data includes a plurality of sets of animations that are to be applied to an animated object and the corresponding blending coefficients associated with each of the animations in a given set. The modelling unit 302 may comprise a neural network that is trained to map animation data and corresponding video game data to blending coefficients. Any suitable machine learning system that can be trained to map the relationship between animation data and corresponding video game data to blending coefficients may be used.
[0048] The modelling unit 302 is configured to output at least one blending coefficient for an animation in the obtained animation data. In Figure 3, this is shown as output 'b'. In preferred examples, the modelling unit 302 is configured to output a blending coefficient for each animation in the obtained animation data (e.g. for A1 and A2) (for example on a focal frame basis or a whole-animation basis).
[0049] The blending coefficient(s) output by the modelling unit is (are) then provided to a blending unit 303, which is configured to blend the animations in the animation data together, using the blending coefficient(s) output by the modelling unit 302. The blending unit 303 may comprise a game engine that is configured to receive the blending coefficients associated with an (or each) animation layer in the animation data, and to render the animated object by blending the animations in accordance with the determined blending coefficients. The blended animation may be output at a display device (not shown), for example.
[0050] The techniques described herein may be implemented in hardware, software or combinations of the two as appropriate. In the case that a software-controlled data processing apparatus is employed to implement one or more features of the embodiments, it will be appreciated that such software, and a storage or transmission medium such as a non-transitory machine-readable storage medium by which such software is provided, are also considered as embodiments of the disclosure.
[0051] The examples described herein are to be understood as illustrative examples of embodiments of the invention. Further embodiments and examples are envisaged. Any feature described in relation to any one example or embodiment may be used alone or in combination with other features. In addition, any feature described in relation to any one example or embodiment may also be used in combination with one or more features of any other of the examples or embodiments, or any combination of any other of the examples or embodiments. Furthermore, equivalents and modifications not described herein may also be employed within the scope of the invention, which is defined in the claims.
Claims (15)
[0001] A method of determining blending coefficients for respective animations, the method comprising:
obtaining animation data, the animation data defining at least two different animations that are at least in part to be simultaneously applied to an animated object, each animation comprising a plurality of frames;
obtaining corresponding video game data, the video game data comprising an in-game state of the object;
inputting the animation data and video game data into a machine learning model, the machine learning model being trained to determine, based on the animation data and corresponding video game data, a blending coefficient for at least one of the animations in the animation data;
determining, based on the output of the machine learning model, one or more blending coefficients for at least one of the animations, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and
blending the at least simultaneously applied part of the two animations using the or each determined blending coefficient, the contribution from each of the at least two animations being in accordance with the or each determined blending coefficient.
[0002] A method according to claim 1, wherein the machine learning model is trained using supervised deep learning, the machine learning model being trained with previously generated animation data and corresponding video game data; and wherein the previously generated animation data includes combinations of animations applied to different animated objects and the corresponding blending coefficients used for animating each of the animated objects.
[0003] A method according to any preceding claim, wherein each animation is assigned to an animation layer and wherein the machine learning model is trained to determine a respective blending coefficient for each animation layer.
[0004] A method according to any preceding claim, wherein the animated object comprises a computer-generated character, the computer-generated character having a plurality of bones defining a skeleton of the character; and wherein each animation defines a different motion that is to be performed by at least some of the bones of the character.
[0005] A method according to claim 4, wherein the animation data comprises at least some of the bones of the computer-generated character and corresponding transforms, each transform defining at least one of a position, scale and orientation of a respective bone; and wherein the different motion of at least some of the character's bones for each animation is defined in terms of temporal variations in the transforms of those bones.
[0006] A method according to any preceding claim, wherein the in-game state of the object comprises one or more of:
i. a physical state of the animated object;
ii. a velocity vector associated with the animated object;
iii. a stamina level associated with the animated object;
iv. an emotive state associated with the animated object; and
v. a context associated with the animated object.
[0007] A method according to any preceding claim, comprising blending the at least two animations using the determined blending coefficient, the contribution from each of the at least two animations being in accordance with the determined blending coefficient.
[0008] A method according to any preceding claim, comprising displaying the blended animation at a display device.
[0009] A computer program comprising computer-implemented instructions that, when run on a computer, cause the computer to implement the method of any of claims 1 to 8.
[0010] A system for determining blending coefficients for respective animations, the system comprising:
an input unit configured to obtain animation data and corresponding video game data;
wherein the animation data comprises at least two animations that are at least in part to be applied simultaneously to an animated object, each animation comprising a plurality of frames;
wherein the video game data comprises at least one in-game variable defining a state of the object in a video game when the at least two animations are to be applied to the animated object;
a modelling unit trained to determine, based on the animation data and corresponding video game data, one or more blending coefficients for each animation, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and
a blending unit configured to blend the at least simultaneously applied part of the two animations together, in accordance with the or each blending coefficient determined for each animation.
[0011] A system according to claim 10, wherein the modelling unit is trained using supervised deep learning to determine a relationship between animation data and corresponding video game data, the modelling unit being trained with previously generated animation data and corresponding video game data; and wherein the previously generated animation data comprises combinations of animations applied to different animated objects and the blending coefficients associated therewith.
[0012] A system according to claim 10 or claim 11, wherein the animated object comprises a computer-generated character, the computer-generated character having a plurality of bones defining a skeleton of the character; and wherein each animation defines a different motion that is to be performed by at least some of the bones of the character.
[0013] A system according to claim 12, wherein the animation data comprises at least some of the bones of the computer-generated character and corresponding transforms, each transform defining at least one of a position, scale and orientation of a respective bone; and wherein the different motion of at least some of the character's bones for each animation is defined in terms of temporal variations in the transforms of those bones.
[0014] A system according to any of claims 10 to 13, wherein the in-game variable comprises one or more of:
i. a physical state of the animated object;
ii. a velocity vector associated with the animated object;
iii. a stamina level associated with the animated object;
iv. an emotive state associated with the animated object; and
v. a context associated with the animated object.
[0015] A system according to any of claims 10 to 14, wherein the modelling unit comprises a neural network trained to map animation data and corresponding video game data to blending coefficients.
Patent family:
Publication number | Publication date
US20200222804A1 | 2020-07-16
GB2580615A | 2020-07-29
GB201900595D0 | 2019-03-06